Segmented Addressing Solves the Virtual Cache Synonym Problem

Author

  • Bruce Jacob
Abstract

If one is interested solely in processor speed, one must use virtually-indexed caches. The traditional purported weakness of virtual caches is their inability to support shared memory. Many implementations of shared memory are at odds with virtual caches—ASID aliasing and virtual-address aliasing (techniques used to provide shared memory) can cause false cache misses and/or give rise to data inconsistencies in a virtual cache, but are necessary features of many virtual memory implementations. By appropriately using a segmented architecture one can solve these problems. In this tech report we describe a virtual memory system developed for a segmented microarchitecture and present the following benefits derived from such an organization: (a) the need to flush virtual caches can be eliminated, (b) virtual cache consistency management can be eliminated, (c) page table space requirements can be cut in half by eliminating the need to replicate page table entries for shared pages, and (d) the virtual memory system can be made less complex because it does not have to deal with the virtual-cache synonym problem.

1 INTRODUCTION

Virtual caches allow faster processing in the common case because they do not require address translation when requested data is found in the caches. They are not used in many architectures despite their apparent simplicity because they have several potential pitfalls that need careful management [10, 16, 29]. In previous research on high clock-rate PowerPC designs, we discovered that the segmented memory management architecture of the PowerPC works extremely well with a virtual cache organization and an appropriate virtual memory organization, eliminating the need for virtual-cache management and allowing the operating system to minimize the space requirements for the page table. Though it might seem obvious that segmentation can solve the problems of a virtual cache organization, we note that several contemporary microarchitectures use segmented addressing mechanisms—including PA-RISC [12], PowerPC [15], POWER2 [28], and x86 [17]—while only PA-RISC and POWER2 take advantage of a virtual cache. Management of the virtual cache can be avoided entirely if sharing is implemented through the global segmented space. This gives the same benefits as single address-space operating systems (SASOS): if virtual-address aliasing (allowing processes to use different virtual addresses for the same physical data) is eliminated, then so is the virtual-cache synonym problem [10]. Thus, consistency management of the virtual cache can be eliminated by a simple operating-system organization. The advantage of a segmented approach (as opposed to a SASOS approach) …
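The central mechanism can be illustrated with a minimal sketch, not taken from the report itself: the segment widths, table sizes, and cache geometry below are assumed for illustration. Each process translates its local virtual address through a per-process segment table into a global virtual address; all sharers map their local segments onto the same global segment, and the virtually indexed cache is accessed with the global address only, so a shared datum can never appear under two different cache indices or tags.

/* Sketch: segmented addressing in front of a virtually indexed cache.
 * Illustrative parameters only (not the PowerPC's or the report's). */
#include <stdint.h>
#include <stdio.h>

#define SEG_BITS    8                       /* 256 local segments per process   */
#define SEG_SHIFT   (32 - SEG_BITS)         /* 24-bit offsets within a segment  */
#define OFFSET_MASK ((1u << SEG_SHIFT) - 1)
#define CACHE_LINE  32u
#define CACHE_SETS  1024u                   /* 32 KB direct-mapped, virtually indexed */

/* Per-process segment table: local segment id -> global segment id. */
typedef struct {
    uint32_t global_segment[1 << SEG_BITS];
} segment_table_t;

/* Local (per-process) virtual address -> global virtual address.
 * The cache is indexed and tagged with this global address only. */
static uint64_t to_global(const segment_table_t *st, uint32_t local_va)
{
    uint32_t seg    = local_va >> SEG_SHIFT;
    uint32_t offset = local_va & OFFSET_MASK;
    return ((uint64_t)st->global_segment[seg] << SEG_SHIFT) | offset;
}

static uint32_t cache_set(uint64_t global_va)
{
    return (uint32_t)((global_va / CACHE_LINE) % CACHE_SETS);
}

int main(void)
{
    /* Two processes map different local segments onto the same global segment 0x42. */
    segment_table_t proc_a = { .global_segment = {0} };
    segment_table_t proc_b = { .global_segment = {0} };
    proc_a.global_segment[1] = 0x42;             /* A sees the shared region in segment 1 */
    proc_b.global_segment[7] = 0x42;             /* B sees it in segment 7                */

    uint32_t va_a = (1u << SEG_SHIFT) | 0x100;   /* same datum, different local addresses */
    uint32_t va_b = (7u << SEG_SHIFT) | 0x100;

    uint64_t g_a = to_global(&proc_a, va_a);
    uint64_t g_b = to_global(&proc_b, va_b);

    /* Both accesses yield the same global address, hence the same set and tag: no synonym. */
    printf("A: global=0x%llx set=%u\nB: global=0x%llx set=%u\n",
           (unsigned long long)g_a, (unsigned)cache_set(g_a),
           (unsigned long long)g_b, (unsigned)cache_set(g_b));
    return 0;
}

Under these assumptions, translation to a physical address is still performed on a cache miss, but the hit path needs no synonym detection and no cache flushing.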


Related articles

U-cache: A cost-effective solution to the virtual cache synonym problem

This paper proposes a cost-effective solution to the virtual cache synonym problem. In the proposed solution, a minimal hardware addition guarantees correct handling of the synonym problem, whereas a simple modification to the virtual-to-physical address mapping in the operating system optimizes performance. The key to the proposed solution is a small physically-indexed cache called a U-cache....

Reducing Address Translation Overheads with Virtual Caching

This dissertation research addresses the overheads of supporting virtual memory, especially the performance, power, and energy overheads of virtual-to-physical address translation through the Translation Lookaside Buffer (TLB). To overcome these overheads, we revisit virtually indexed, virtually tagged caches. In practice, they have not been common in commercial microarchitecture designs, and the crux of ...

U-Cache: A Cost-Effective Solution to the Synonym Problem

This paper proposes a cost-effective solution to the synonym problem. In the proposed solution, a minimal hardware addition guarantees correctness, whereas the software counterpart helps improve performance. The key to the proposed solution is the addition of a small physically-indexed cache called the U-cache. The U-cache maintains the reverse translation information of the cache blocks tha...

V-P cache: a storage efficient virtual cache organization

Introduction: In the past several years, processor speeds have improved dramatically. This drastic increase in processor speed makes a cache with a very fast hit time a must for high-performance computer systems. These trends make direct-mapped virtual caches very attractive, since they inherit the hit-time advantage of both direct-mapped and virtual caches. Howev...

A memory management unit and cache controller for the MARS system

For large caches, the interaction between cache access and address translation affects the machine cycle time and the access time to memory. Physically addressed caches slow down cache access because of virtual-to-physical address translation. Virtually addressed caches are faster, but the synonym problem is difficult to handle. With some software constraints and hardware support, our virtually ad...
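As background for the synonym problem these papers attack, the following toy example shows how two virtual aliases of one physical page land in different sets of a virtually indexed cache whose index bits extend above the page offset. The addresses and cache geometry are assumed for illustration and are not taken from any of the papers above.

/* Toy illustration of the virtual-cache synonym problem (assumed parameters). */
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 32u
#define CACHE_SETS 1024u   /* 32 KB direct-mapped: index bits reach above a 4 KB page offset */

static uint32_t cache_set(uint32_t va) { return (va / CACHE_LINE) % CACHE_SETS; }

int main(void)
{
    /* Two virtual mappings of the same physical page, same offset 0x10 within the page. */
    uint32_t alias1 = 0x00400000u + 0x10;
    uint32_t alias2 = 0x00803000u + 0x10;   /* different "page color" in the index bits */

    /* Prints different set numbers: the same datum can be cached twice and diverge. */
    printf("alias1 -> set %u, alias2 -> set %u\n",
           (unsigned)cache_set(alias1), (unsigned)cache_set(alias2));
    return 0;
}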



Journal title:

Volume   Issue

Pages  -

Publication date: 1997